
    An efficient scheduling method for grid systems based on a hierarchical stochastic petri net

    This paper addresses the problem of resource scheduling in a grid computing environment. One of the main goals of grid computing is to share system resources among geographically dispersed users and to schedule resource requests efficiently. Grid computing resources are distributed, heterogeneous, dynamic, and autonomous, which makes resource scheduling a complex problem. This paper proposes a new approach to resource scheduling in grid computing environments, the hierarchical stochastic Petri net (HSPN). The HSPN optimizes grid resource sharing by categorizing resource requests into three layers, where each layer has special functions for receiving subtasks from, and delivering data to, the layer above or below. We compare the HSPN's performance with the Min-min and Max-min resource scheduling algorithms. Our results show that the HSPN performs better than Max-min but slightly underperforms Min-min.
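
    To make the comparison concrete, below is a minimal sketch of the Min-min baseline that the paper compares against (the HSPN itself is a stochastic Petri-net model and is not reproduced here); the task and machine data are hypothetical.

```python
# Min-min scheduling: repeatedly assign the task whose earliest possible
# completion time is smallest, then update that machine's ready time.
def min_min(exec_time, num_machines):
    """exec_time[t][m]: expected run time of task t on machine m."""
    ready = [0.0] * num_machines          # ready time of each machine
    unassigned = set(range(len(exec_time)))
    schedule = {}                         # task -> machine
    while unassigned:
        # Find the (task, machine) pair with the minimum completion time.
        t, m, finish = min(
            ((t, m, ready[m] + exec_time[t][m])
             for t in unassigned for m in range(num_machines)),
            key=lambda x: x[2],
        )
        schedule[t] = m
        ready[m] = finish
        unassigned.remove(t)
    return schedule, max(ready)           # assignment and makespan

# Hypothetical example: 3 tasks on 2 machines.
times = [[3, 5], [2, 4], [6, 1]]
print(min_min(times, 2))                  # ({2: 1, 1: 0, 0: 0}, 5)
```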

    SETTI: A Self-supervised Adversarial Malware Detection Architecture in an IoT Environment

    In recent years, malware detection has become an active research topic in the area of Internet of Things (IoT) security. The principle is to exploit knowledge from large quantities of continuously generated malware. Existing algorithms rely on previously collected malware features for IoT devices and lack real-time prediction capability. More research is thus required on malware detection to cope with real-time misclassification of input IoT data. Motivated by this, in this paper we propose an adversarial self-supervised architecture for detecting malware in IoT networks, SETTI, considering samples of IoT network traffic that may not be labeled. In the SETTI architecture, we design three self-supervised attack techniques, namely Self-MDS, GSelf-MDS and ASelf-MDS. The Self-MDS method considers the IoT input data and adversarial sample generation in real time. GSelf-MDS builds a generative adversarial network model to generate adversarial samples in the self-supervised structure. Finally, ASelf-MDS utilizes three well-known perturbation sample techniques to develop adversarial malware and inject it into the self-supervised architecture. We also apply a defence method, adversarial self-supervised training, to protect the malware detection architecture against the injection of malicious samples. To validate the attack and defence algorithms, we conduct experiments on two recent IoT datasets: IoT23 and NBIoT. Comparison of the results shows that in the IoT23 dataset, the Self-MDS method has the most damaging consequences from the attacker's point of view, reducing the accuracy rate from 98% to 74%. In the NBIoT dataset, the ASelf-MDS method is the most devastating algorithm, plunging the accuracy rate from 98% to 77%.
    Comment: 20 pages, 6 figures, 2 tables. Submitted to ACM Transactions on Multimedia Computing, Communications, and Applications.
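
    The abstract does not name the three perturbation techniques used by ASelf-MDS; as a hedged illustration, the sketch below shows one widely used method, the fast gradient sign method (FGSM), applied to a generic PyTorch detector. The model, loss function, and perturbation budget eps are assumptions rather than the paper's exact setup.

```python
import torch

def fgsm_perturb(model, x, y, loss_fn, eps=0.05):
    """One-step FGSM: nudge each input feature in the direction that
    increases the detector's loss, within a small budget eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # sign of the gradient per feature
    return x_adv.clamp(0.0, 1.0).detach()  # keep features in a valid range

# Usage with a hypothetical detector `net` and a labeled traffic batch:
# x_adv = fgsm_perturb(net, batch, labels, torch.nn.functional.cross_entropy)
```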

    Similarity-based Android Malware Detection Using Hamming Distance of Static Binary Features

    In this paper, we develop four malware detection methods that use Hamming distance to find the similarity between samples: first nearest neighbors (FNN), all nearest neighbors (ANN), weighted all nearest neighbors (WANN), and k-medoid based nearest neighbors (KMNN). Our proposed methods trigger an alarm when an Android app is detected as malicious, which helps prevent detected malware from spreading on a broader scale. We provide a detailed description of the proposed detection methods and related algorithms, and include an extensive analysis to assess the suitability of our proposed similarity-based detection methods. To this end, we perform our experiments on three datasets of benign and malware Android apps: Drebin, Contagio, and Genome. To corroborate the actual effectiveness of our classifier, we carry out performance comparisons with several state-of-the-art classification and malware detection algorithms, namely the Mixed and Separated solutions, the program dissimilarity measure based on entropy (PDME), and the FalDroid algorithm. We test different types of features, namely API, intent, and permission features, on these three datasets. The results confirm that the accuracy rates of the proposed algorithms exceed 90% and in some cases (i.e., considering API features) exceed 99%, and are comparable with existing state-of-the-art solutions.
    Comment: 20 pages, 8 figures, 11 tables. FGCS Elsevier journal.
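
    As a rough illustration of the simplest of the four methods, the sketch below implements first-nearest-neighbor (FNN) classification under Hamming distance over binary static features. The toy feature vectors are hypothetical, and the ANN, WANN, and KMNN variants are not shown.

```python
import numpy as np

def hamming(a, b):
    """Number of differing bits between two binary feature vectors."""
    return int(np.count_nonzero(a != b))

def fnn_classify(sample, train_X, train_y):
    """FNN variant: label a sample with the class of its single
    closest training app under Hamming distance."""
    dists = [hamming(sample, x) for x in train_X]
    return train_y[int(np.argmin(dists))]

# Toy example with 4-bit permission/API features (hypothetical data).
X = np.array([[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1]])
y = ["malware", "benign", "malware"]
print(fnn_classify(np.array([1, 0, 1, 1]), X, y))  # exact match -> "malware"
```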

    Scheduling Distributed Energy Resource Operation and Daily Power Consumption for a Smart Building to Optimize Economic and Environmental Parameters

    In this paper, we address the problem of minimizing the total daily energy cost in a smart residential building composed of multiple smart homes, with the aim of reducing energy bills and greenhouse gas emissions under different system constraints and user preferences. As household appliances contribute significantly to the energy consumption of smart homes, electricity costs in buildings can be decreased by scheduling the operation of domestic appliances. We propose an optimization model for jointly minimizing electricity costs and CO2 emissions while considering consumer preferences in smart buildings that are equipped with distributed energy resources (DERs). Both controllable and uncontrollable tasks and DER operations are scheduled according to the real-time price of electricity and a peak demand charge that reduces the peak demand on the grid. We formulate the daily energy consumption scheduling problem in multiple smart homes from economic and environmental perspectives and exploit a mixed integer linear programming (MILP) technique to solve it. We validate the proposed approach through extensive experimental analysis, and the results show that it can decrease both CO2 emissions and the daily energy cost.
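
    The sketch below is a heavily reduced illustration of this kind of MILP, written with the open-source PuLP library: a single shiftable appliance is scheduled over four hourly slots to minimize a weighted sum of electricity cost and CO2 emissions. The tariff, emission factors, and weight are hypothetical, and the paper's full model (multiple homes, DERs, peak-demand charges, user preferences) is not reproduced.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

T = 4                                  # hourly slots
price = [0.30, 0.12, 0.10, 0.25]       # $/kWh (assumed real-time tariff)
co2 = [0.50, 0.40, 0.35, 0.45]         # kg CO2/kWh (assumed grid intensity)
load = 2.0                             # kWh drawn per active slot
alpha = 0.5                            # weight trading off cost vs. emissions

prob = LpProblem("appliance_schedule", LpMinimize)
on = [LpVariable(f"on_{t}", cat=LpBinary) for t in range(T)]

# Weighted objective: electricity cost plus CO2 emissions.
prob += lpSum(on[t] * load * (alpha * price[t] + (1 - alpha) * co2[t])
              for t in range(T))
# User preference (assumed): the appliance must run for exactly two slots.
prob += lpSum(on) == 2

prob.solve()
print([int(v.value()) for v in on])    # [0, 1, 1, 0]: the two cheapest slots
```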

    Task scheduling in grid computing based on Queen-bee algorithm

    Grid computing is a model that connects networks of processors to perform large-scale computations. Since multiple applications may run simultaneously, each requiring several resources that are often unavailable, a scheduling system that allocates resources is essential. Given the extent and distribution of resources in grid computing, task scheduling is one of the major challenges in grid environments. Scheduling algorithms must be designed with these challenges in mind and must assign tasks to resources so as to decrease the resulting makespan. Because task scheduling on the grid is a complex problem, non-deterministic meta-heuristic algorithms are the best candidates for it. In this paper, the Queen-bee algorithm is presented to solve this problem, and the results are compared with those of several other meta-heuristic algorithms. It is also shown that the proposed algorithm reduces computation time as well as makespan compared with the other algorithms.
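
    As a hedged sketch of the general idea, the code below implements a generic queen-bee-style evolutionary scheduler that minimizes makespan: in each generation the best schedule (the queen) is crossed with every other member of the colony. The operators and parameters are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def makespan(assign, exec_time, num_machines):
    """Completion time of the busiest machine under a given assignment."""
    finish = [0.0] * num_machines
    for t, m in enumerate(assign):
        finish[m] += exec_time[t][m]
    return max(finish)

def queen_bee_schedule(exec_time, num_machines, pop=20, gens=100, pm=0.2):
    n = len(exec_time)
    colony = [[random.randrange(num_machines) for _ in range(n)]
              for _ in range(pop)]
    for _ in range(gens):
        colony.sort(key=lambda a: makespan(a, exec_time, num_machines))
        queen = colony[0]                  # best schedule mates with all others
        brood = []
        for drone in colony[1:]:
            cut = random.randrange(1, n)   # one-point crossover
            child = queen[:cut] + drone[cut:]
            for i in range(n):             # per-gene mutation
                if random.random() < pm:
                    child[i] = random.randrange(num_machines)
            brood.append(child)
        colony = [queen] + brood           # elitism: the queen survives
    return min(colony, key=lambda a: makespan(a, exec_time, num_machines))
```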

    Recent advances in cloud data centers toward fog data centers

    In recent years, we have witnessed tremendous advances in cloud data centers (CDCs) from the point of view of the communication layer. A recent report from Cisco Systems Inc. demonstrates that CDCs, which are distributed across many geographical locations, will dominate the global data center traffic flow for the foreseeable future. Their importance is highlighted by a top-line projection from this forecast that by 2019, more than four-fifths of total data center traffic will be Cloud traffic. The geographical diversity of the computing resources in CDCs provides several benefits, such as high availability, effective disaster recovery, uniform access to users in different regions, and access to different energy sources. Although Cloud technology is currently predominant, it is essential to leverage new agile software technologies, agile processes, and agile applications close to both the edge and the users; hence, the concept of Fog has been developed.
    Fog computing (FC) has emerged as an alternative to traditional Cloud computing to support geographically distributed, latency-sensitive, and QoS-aware IoT applications while reducing the burden on the data centers used in traditional Cloud computing. In particular, FC, with features that can support heterogeneity and real-time applications (e.g., low latency, location awareness, and the capacity to process a large number of nodes with wireless access), is an attractive solution for delay- and resource-constrained large-scale applications. The distinguishing feature of the FC paradigm is that a set of Fog nodes (FNs) spreads communication and computing resources over the wireless access network to provide resource augmentation to resource-limited and energy-limited wireless (possibly mobile) devices. The joint management of the Fog and Internet of Things (IoT) paradigms can reduce the energy consumption and operating costs of state-of-the-art Fog-based data centers (FDCs). An FDC is dedicated to supervising the transmission, distribution, and communication of FC. As a vital component of the Internet of Everything (IoE) environment, an FDC is capable of filtering and processing a considerable amount of incoming data on edge devices, making the data processing architecture distributed and thereby scalable. An FDC therefore provides a platform for filtering and analyzing the data generated by sensors, utilizing the resources of FNs.
    Increasing interest is emerging in FDCs and CDCs that allow the delivery of various kinds of agile services and applications over telecommunication networks and the Internet, including resource provisioning, data streaming/transcoding, analysis of high-definition videos across the edge of the network, IoE application analysis, etc. Motivated by these issues, this special section solicits original research and practical contributions that advance the use of CDCs/FDCs in new technologies such as IoT, edge networks, and industries. Results obtained from simulations are validated, in terms of their boundaries, by experiments or analytical results.
    The main objectives of this special issue are to provide a discussion forum for people interested in Cloud and Fog networking and to present new models, adaptive tools, and applications specifically designed for distributed and parallel on-demand requests received from (mobile) users and Cloud applications. The papers presented in this special issue provide insights into fields related to Cloud and Fog/edge architecture, including parallel processing of Cloudlets/Foglets, the presentation of new emerging models, performance evaluation and improvements, and developments in Cloud/Fog applications. We hope that readers can benefit from the insights in these papers and contribute to these rapidly growing areas.
